Gaze-driven Object Tracking for Real Time Rendering

Authors

  • Radoslaw Mantiuk
  • Bartosz Bazyluk
  • Rafal Mantiuk
Abstract

To efficiently deploy eye-tracking within 3D graphics applications, we present a new probabilistic method that predicts the patterns of a user’s eye fixations in animated 3D scenes from noisy eye-tracker data. The proposed method utilises both the eye-tracker data and the known information about the 3D scene to improve accuracy, robustness and stability. Eye-tracking can thus be used, for example, to induce focal cues via gaze-contingent depth-of-field rendering, add intuitive controls to a video game, and create a highly reliable scene-aware saliency model. The computed probabilities rely on the consistency of the gaze scan-paths with the position and velocity of a moving or stationary target. The temporal characteristic of eye fixations is imposed by a Hidden Markov model, which steers the solution towards the most probable fixation patterns. The derivation of the algorithm is driven by data from two eye-tracking experiments: the first experiment provides actual eye-tracker readings and the position of the target to be tracked; the second is used to derive a JND-scaled (Just Noticeable Difference) quality metric that quantifies the perceived loss of quality due to the errors of the tracking algorithm. Data from both experiments are used to justify design choices, and to calibrate and validate the tracking algorithms. This novel method outperforms commonly used fixation algorithms and is able to track objects smaller than the nominal error of an eye-tracker.
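
As a rough illustration of the kind of probabilistic target selection the abstract describes, the sketch below keeps a belief over candidate scene objects and updates it for each gaze sample with a single HMM forward step: the transition model favours staying on the current target, and the emission likelihood measures how consistent the gaze position and velocity are with each object. The Gaussian noise parameters, the transition probability and the two-object scene are illustrative assumptions, not the authors' calibrated formulation.

```python
import numpy as np

# Hypothetical sketch of HMM-style gaze-to-object assignment (not the paper's exact model).
# Hidden states: candidate scene objects the viewer may be fixating.
# Observations: noisy gaze samples (position and velocity) from the eye tracker.

SIGMA_POS = 2.0   # assumed gaze position noise, degrees of visual angle
SIGMA_VEL = 5.0   # assumed gaze velocity noise, degrees per second
P_STAY    = 0.95  # assumed probability of keeping the current fixation target

def emission_prob(gaze_pos, gaze_vel, obj_pos, obj_vel):
    """Likelihood that a gaze sample is consistent with an object's screen
    position and velocity, under an isotropic Gaussian noise model."""
    d_pos = np.linalg.norm(gaze_pos - obj_pos)
    d_vel = np.linalg.norm(gaze_vel - obj_vel)
    return np.exp(-0.5 * (d_pos / SIGMA_POS) ** 2) * \
           np.exp(-0.5 * (d_vel / SIGMA_VEL) ** 2)

def update_belief(belief, gaze_pos, gaze_vel, objects):
    """One forward step of the HMM: propagate the previous belief through the
    transition model, then re-weight it by the emission likelihoods."""
    n = len(objects)
    p_switch = (1.0 - P_STAY) / max(n - 1, 1)
    predicted = belief * P_STAY + (belief.sum() - belief) * p_switch
    likelihood = np.array([emission_prob(gaze_pos, gaze_vel, o["pos"], o["vel"])
                           for o in objects])
    posterior = predicted * likelihood
    total = posterior.sum()
    return posterior / total if total > 0 else np.full(n, 1.0 / n)

# Usage: the object with the highest posterior is treated as the tracked target,
# which can then drive gaze-contingent effects such as depth-of-field focus.
objects = [{"pos": np.array([0.0, 0.0]), "vel": np.array([3.0, 0.0])},
           {"pos": np.array([8.0, 1.0]), "vel": np.array([0.0, 0.0])}]
belief = np.full(len(objects), 1.0 / len(objects))
belief = update_belief(belief, np.array([0.5, -0.3]), np.array([2.5, 0.2]), objects)
print("most probable target:", int(np.argmax(belief)))
```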

Similar Resources

Subjective evaluation of two stereoscopic imaging systems exploiting visual attention to improve 3D quality of experience

Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline syst...

FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality

We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR in...

Gaze-Dependent Depth-of-Field Effect Rendering in Virtual Environments

This paper presents a gaze-dependent depth-of-field (DOF) rendering setup, consisting of a high-frequency eye tracker connected to a graphics workstation. A scene is rendered and visualised with the DOF simulation controlled by data captured with the eye tracker. To render a scene in real time, the reverse-mapped z-buffer DOF simulation technique with a blurring method based on Poisson disk is us...
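
The sketch below is a minimal image-space approximation of such a gaze-dependent DOF pass, assuming a generic circle-of-confusion model and a small fixed Poisson-disk sample set; it is not the reverse-mapped z-buffer implementation from the paper, only an illustration of blurring each pixel according to its depth relative to the depth under the gaze point.

```python
import numpy as np

# Illustrative gaze-dependent depth-of-field post-process (the constants and the
# circle-of-confusion model are assumptions, not the paper's implementation).

# A small fixed set of Poisson-disk-like offsets on the unit disk.
POISSON_DISK = np.array([
    [ 0.00,  0.00], [ 0.53,  0.26], [-0.41,  0.58], [-0.70, -0.12],
    [ 0.20, -0.67], [ 0.88, -0.30], [-0.15,  0.93], [-0.86,  0.45],
])

def circle_of_confusion(depth, focus_depth, max_radius=6.0, scale=4.0):
    """Blur radius in pixels, growing with distance from the focal depth."""
    return np.clip(scale * np.abs(depth - focus_depth) / np.maximum(depth, 1e-4),
                   0.0, max_radius)

def gaze_dof(color, depth, gaze_xy):
    """Blur `color` (H, W, 3) using `depth` (H, W); the focal depth is taken
    from the depth buffer at the gaze position reported by the eye tracker."""
    h, w = depth.shape
    gx = int(np.clip(gaze_xy[0], 0, w - 1))
    gy = int(np.clip(gaze_xy[1], 0, h - 1))
    coc = circle_of_confusion(depth, depth[gy, gx])

    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(color, dtype=np.float64)
    for dx, dy in POISSON_DISK:
        # Gather colour from offsets scaled by the per-pixel blur radius.
        sx = np.clip((xs + dx * coc).astype(int), 0, w - 1)
        sy = np.clip((ys + dy * coc).astype(int), 0, h - 1)
        out += color[sy, sx]
    return out / len(POISSON_DISK)

# Usage with synthetic data: focus follows whatever point the tracker reports.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
depth = np.tile(np.linspace(1.0, 10.0, 64), (64, 1))
blurred = gaze_dof(img, depth, gaze_xy=(10, 32))
```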

Gaze Contingent Foveated Rendering

The aim of this paper is to present experimental results for gaze-contingent foveated rendering for 2D displays. We display an image on a conventional digital display and use an eye-tracking system to determine the viewer's gaze co-ordinates in real time. Using a stack of pre-processed images, we are then able to determine the blur profile, select the corresponding image from the image stack and...
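
A minimal sketch of compositing from such a pre-blurred image stack is given below; the eccentricity-to-level mapping is an assumed linear fall-off outside the fovea rather than the blur profile measured in the paper.

```python
import numpy as np

# Illustrative foveated compositing from a stack of pre-blurred images
# (the fall-off parameters below are assumptions, not measured values).

def foveate(image_stack, gaze_xy, deg_per_px=0.03, fovea_deg=2.0, step_deg=3.0):
    """image_stack: (L, H, W, 3) with level 0 the sharpest image.
    Returns a composite whose blur level grows with eccentricity from the gaze."""
    levels, h, w, _ = image_stack.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Retinal eccentricity of each pixel relative to the gaze position.
    ecc = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) * deg_per_px
    # Sharp inside the fovea, then one blur level per `step_deg` of eccentricity.
    level = np.clip((ecc - fovea_deg) / step_deg, 0, levels - 1).astype(int)

    out = np.empty((h, w, 3), dtype=image_stack.dtype)
    for lvl in range(levels):
        mask = level == lvl
        out[mask] = image_stack[lvl][mask]
    return out

# Usage: in practice each level would be a progressively blurred copy of the frame;
# here the stack is a repeated random image to keep the example self-contained.
stack = np.repeat(np.random.default_rng(1).random((1, 48, 64, 3)), 4, axis=0)
composite = foveate(stack, gaze_xy=(32, 24))
```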

Segmentation Assisted Object Distinction for Direct Volume Rendering

Ray Casting is a direct volume rendering technique for visualizing 3D arrays of sampled data. It has vital applications in medical and biological imaging. Nevertheless, it is inherently open to cluttered classification results. It suffers from overlapping transfer function values and lacks a sufficiently powerful voxel parsing mechanism for object distinction. In this work, we are proposing an ...

Journal:
  • Comput. Graph. Forum

Volume 32, Issue -

Pages -

Publication date: 2013